Benchmarking flash SSDs (solid state disks) is significantly more difficult than benchmarking conventional spinning disks, because SSD
performance changes, often dramatically, as data is read from and written to the drive. These are not performance changes over a timeframe of months, but minute by
minute, likely due to wear levelling and the internal log-structured layout of the flash translation layer. In any case, extra care has to be taken when benchmarking SSDs.
Each test needs to run long enough to expose the worst-case performance of the SSD, and each test needs to be performed after an ATA security erase, to
ensure the SSD starts the test with a clean slate. It is also necessary to fill up the SSD, to measure how it performs as the free space gets used
up.
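As a minimal sketch of the secure-erase step, the commands below use hdparm on Linux. The device name /dev/sdX and the temporary password are placeholders, and the script defaults to printing the commands rather than running them, since a real security erase destroys all data on the drive:

```shell
#!/bin/sh
# Sketch of an ATA security erase using hdparm on Linux (run as root).
# /dev/sdX and the temporary password "p" are placeholders for your system.
# WARNING: a real security erase destroys all data on the drive.
DEV=/dev/sdX
PASS=p

# DRYRUN=echo prints each command instead of executing it.
# Set DRYRUN= (empty) only when you are sure of the device name.
DRYRUN=echo

# 1. Inspect the drive; hdparm -I shows the security state, which must
#    read "not frozen" for the erase to be accepted.
$DRYRUN hdparm -I "$DEV"

# 2. Set a temporary user password to enable the ATA security feature set.
$DRYRUN hdparm --user-master u --security-set-pass "$PASS" "$DEV"

# 3. Issue the security erase; the drive clears the password on completion.
$DRYRUN hdparm --user-master u --security-erase "$PASS" "$DEV"
```

If the drive reports "frozen", a suspend/resume cycle or hot-replug of the SATA link is the usual workaround before the security commands are accepted.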
For many applications, the peak performance an SSD delivers in its first few seconds is not meaningful. If the SSD can provide 10,000 IOPS for an
initial 2 seconds, but thereafter, for the rest of its life span without an ATA security erase, slows sharply to 1,200 IOPS and wavers up and down,
averaging 2,400 IOPS over an hour, then the SSD may only be suitable for a server that needs at most 1,200 IOPS.
Interactive applications such as web, email, and database servers may need to be sized to the worst-case IOPS. On the other hand, servers that run batch jobs, which
need neither user interaction nor consistent run times, may be more tolerant of inconsistent performance, including sharp drops in
IOPS, as long as the average IOPS figures are high.
In follow-up posts, I will publish benchmark results for the
Intel X25-E and
Intel X25-M SSDs, with a focus on finding a
configuration that provides stable 4K random 70/30 read/write IOPS at a low queue depth and is suitable for Linux server (web, mail, database) usage.
IOmeter will be used as the benchmarking tool. The process is time consuming, with each run taking several hours, and I expect to continue testing configurations
for using SSDs in servers requiring high IOPS and low latencies.
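For readers who prefer to benchmark directly on Linux rather than with IOmeter, a roughly equivalent fio job file is sketched below. The workload parameters mirror the 4K random 70/30 read/write mix at low queue depth described above; the device name, job name, and the one-hour runtime are my own placeholder choices, not values from my test setup:

```ini
; Sketch of a fio job approximating the target workload:
; 4K random I/O, 70% read / 30% write, queue depth 4,
; run long enough (1 hour) to reach steady-state performance.
; /dev/sdX is a placeholder; writing to it destroys data on that device.
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=4
time_based=1
runtime=3600

[ssd-randrw-70-30]
filename=/dev/sdX
rw=randrw
rwmixread=70
```

The long, time-based run matters for the reasons given earlier: a short run only measures the fresh-drive burst, not the sustained worst case.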
Lim Wee Cheong
21st March 2010